Direction-aggregated Attack for Transferable Adversarial Examples

Authors

Abstract

Deep neural networks are vulnerable to adversarial examples that are crafted by imposing imperceptible changes on the inputs. However, these adversarial examples are most successful in white-box settings, where the model and its parameters are available. Finding adversarial examples that transfer to other models, or that can be developed in a black-box setting, is significantly more difficult. In this article, we propose the Direction-aggregated attack, which delivers transferable adversarial examples. Our method utilizes an aggregated direction during the attack process to keep the generated adversarial examples from overfitting the white-box model. Extensive experiments on ImageNet show that our proposed method improves the transferability of adversarial examples and outperforms state-of-the-art attacks, especially against adversarially trained models. The best averaged attack success rate reaches 94.6% against three adversarially trained models and 94.8% against five defense methods. It also reveals that current defense approaches do not prevent transferable adversarial attacks.
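The core idea sketched in the abstract, aggregating gradient directions over several points around the current iterate at each attack step so the adversarial example does not overfit the white-box model, can be illustrated with a minimal PyTorch sketch. The function name `da_attack`, the Gaussian neighborhood sampling, and all hyperparameter values below are illustrative assumptions, not the paper's exact algorithm or reported settings.

```python
import torch
import torch.nn.functional as F

def da_attack(model, x, y, eps=8/255, alpha=2/255,
              steps=10, n_dirs=4, sigma=0.05):
    """Illustrative direction-aggregated I-FGSM-style attack.

    A sketch under assumed details: at each step the update direction
    is the sign of the gradient summed over `n_dirs` randomly perturbed
    copies of the current iterate, which smooths the direction instead
    of following the white-box model's gradient at a single point.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        grad = torch.zeros_like(x_adv)
        for _ in range(n_dirs):
            # Gradient of the loss at a random point near the current iterate.
            x_near = (x_adv + sigma * torch.randn_like(x_adv)).detach()
            x_near.requires_grad_(True)
            loss = F.cross_entropy(model(x_near), y)
            grad = grad + torch.autograd.grad(loss, x_near)[0]
        # One ascent step along the aggregated direction.
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the L-inf eps-ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Given a pretrained classifier and a batch of images normalized to [0, 1], `da_attack(model, images, labels)` would return perturbed images inside an 8/255 L-infinity ball; setting `n_dirs=1` and `sigma=0` recovers plain iterative FGSM, which makes the role of the aggregation step easy to isolate.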


Similar Resources

The Space of Transferable Adversarial Examples

Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time. Adversarial examples are known to transfer across models: the same perturbed input is often misclassified by different models, despite having been generated to mislead a specific architecture. This phenomenon enables simple yet powerful black-box attacks against deployed ML systems. In th...


Delving into Transferable Adversarial Examples and Black-box Attacks

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferabilit...


Adversarial examples for generative models

We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model...


Adversarial Examples for Malware Detection

Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor—yet carefully selected—perturbations. In this work, we expand on existing adversarial example crafting algorithms to construct a highly-effective attack that uses adversarial examples against malware detecti...


Adversarial Examples for Visual Decompilers



Journal

Journal title: ACM Journal on Emerging Technologies in Computing Systems

Year: 2022

ISSN: 1550-4832, 1550-4840

DOI: https://doi.org/10.1145/3501769